70-Page Paper From Yoshua Bengio Team: GFlowNet Foundations
There's no slowing down the godfathers of deep learning, who continue to innovate. Several years ago Geoffrey Hinton introduced Capsule Networks (CapsNets) for dynamic image modelling, and this past summer a Yoshua Bengio team proposed Generative Flow Networks (GFlowNets), a flow-network-based generative method that turns a given positive reward function into a generative policy that samples objects with probability proportional to their reward. GFlowNets, which are inspired by the way information propagates in temporal-difference reinforcement learning, achieve competitive results on molecule synthesis tasks and perform well on simple domains where the reward function has many modes.

In the new paper GFlowNet Foundations, a research team from Mila, University of Montreal, McGill University, Stanford University, CIFAR and Microsoft Azure AI builds on GFlowNets, providing an in-depth formal foundation and an expanded set of theoretical results for a broad range of scenarios, especially active learning.
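To make the core property concrete — a policy that samples objects with probability proportional to a given positive reward — here is a minimal toy sketch. It is not the paper's training algorithm (which learns flows with neural networks); instead it computes exact flows by dynamic programming on a tiny hypothetical domain (binary strings built one bit at a time, with an arbitrary made-up reward), so that the induced sampling distribution matches the target proportionality exactly.

```python
import random
from itertools import product

# Toy flow-network sketch (illustrative only, not the paper's method):
# states are partial binary strings, actions append a bit, and every
# complete string x of length L receives a positive reward R(x).
# The "flow" through a state is the total reward reachable from it;
# sampling each action in proportion to its child's flow yields
# P(x) = R(x) / Z, i.e. sampling proportional to reward.

L = 3  # length of complete strings (toy domain, chosen for illustration)

def reward(x):
    # arbitrary positive reward: strings with more 1s score higher
    return 1.0 + 2.0 * x.count("1")

def flow(state):
    # total reward of all complete strings reachable from this prefix
    if len(state) == L:
        return reward(state)
    return flow(state + "0") + flow(state + "1")

def sample():
    # forward policy: pick each next bit with probability
    # proportional to the flow of the resulting child state
    state = ""
    while len(state) < L:
        f0, f1 = flow(state + "0"), flow(state + "1")
        state += "1" if random.random() < f1 / (f0 + f1) else "0"
    return state

random.seed(0)
n = 20000
counts = {}
for _ in range(n):
    x = sample()
    counts[x] = counts.get(x, 0) + 1

# compare empirical frequencies with the target R(x) / Z
Z = sum(reward("".join(bits)) for bits in product("01", repeat=L))
for x in sorted(counts):
    print(x, "empirical", round(counts[x] / n, 3),
          "target", round(reward(x) / Z, 3))
```

With exact flows the match is exact up to sampling noise; the contribution of GFlowNets is learning such flows with a neural network when the state space is far too large to enumerate.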